
    The CHREST architecture of cognition: the role of perception in general intelligence

    Original paper can be found at: http://www.atlantis-press.com/publications/aisr/AGI-10/ This paper argues that the CHREST architecture of cognition can shed important light on developing artificial general intelligence. The key theme is that "cognition is perception." The description of the main components and mechanisms of the architecture is followed by a discussion of several domains where CHREST has already been successfully applied, such as the psychology of expert behaviour, the acquisition of language by children, and the learning of multiple representations in physics. The characteristics of CHREST that enable it to account for empirical data include: self-organisation, an emphasis on cognitive limitations, the presence of a perception-learning cycle, and the use of naturalistic data as input for learning. We argue that some of these characteristics can help shed light on the hard questions facing theorists developing artificial general intelligence, such as intuition, the acquisition and use of concepts, and the role of embodiment.

    Tackling the PAN’09 External Plagiarism Detection Corpus with a Desktop Plagiarism Detector

    Ferret is a fast and effective tool for detecting similarities in a group of files. Applying it to the PAN’09 corpus required modifications to meet the requirements of the competition, mainly to deal with the very large number of files, the large size of some of them, and to automate some of the decisions that would normally be made by a human operator. Ferret was able to detect numerous files in the development corpus that contain substantial similarities not marked as plagiarism, but it also identified many pairs where random similarities masked actual plagiarism. An improved metric is therefore indicated if the “plagiarised” or “not plagiarised” decision is to be automated.
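
    The abstract does not spell out Ferret's similarity measure, so the sketch below is only a rough illustration of the general approach such detectors take: score each pair of files by the resemblance of their word trigrams and flag pairs above a threshold. The function names and the threshold value are assumptions for illustration, not Ferret's actual code or settings.

        # Illustrative sketch (not Ferret's implementation): word-trigram resemblance.
        import re

        def trigrams(text):
            """Return the set of word trigrams occurring in a text."""
            words = re.findall(r"[a-z0-9']+", text.lower())
            return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

        def resemblance(text_a, text_b):
            """Shared trigrams divided by all distinct trigrams (Jaccard-style)."""
            a, b = trigrams(text_a), trigrams(text_b)
            return len(a & b) / len(a | b) if (a or b) else 0.0

        # Hypothetical automated decision: flag a pair as suspicious above a threshold.
        SUSPICION_THRESHOLD = 0.04   # assumed value, not the competition setting
        def looks_plagiarised(text_a, text_b):
            return resemblance(text_a, text_b) > SUSPICION_THRESHOLD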

    Developing and Evaluating Cognitive Architectures with Behavioural Tests

    Original paper can be found at: http://www.aaai.org/Press/Reports/Workshops/ws-07-04.php We present a methodology for developing and evaluating cognitive architectures based on behavioural tests and suitable optimisation algorithms. Behavioural tests are used to clarify those aspects of an architecture's implementation which are critical to that theory. By fitting the performance of the architecture to observed behaviour, values for the architecture's parameters can be automatically obtained, and information can be derived about how components of the architecture relate to performance. Finally, with an appropriate optimisation algorithm, different cognitive architectures can be evaluated and their performances compared on multiple tasks. Peer reviewed.
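
    The core idea of fitting an architecture's parameters to observed behaviour can be sketched very simply. The sketch below assumes a hypothetical run_model function and uses a plain random search as the optimiser; the paper's own tasks, architectures, and optimisation algorithms are not reproduced here.

        # Sketch only: fit model parameters to observed behaviour by random search.
        import random

        observed = [0.42, 0.55, 0.61, 0.70]      # behavioural test scores (assumed data)

        def run_model(params):
            """Hypothetical stand-in for running the architecture on the test tasks."""
            rate, noise = params
            return [min(1.0, rate * (i + 1) + noise) for i in range(len(observed))]

        def error(params):
            """Sum of squared differences between model and observed behaviour."""
            return sum((p - o) ** 2 for p, o in zip(run_model(params), observed))

        best, best_err = None, float("inf")
        for _ in range(10_000):                  # simple random search as the optimiser
            candidate = (random.uniform(0.0, 0.3), random.uniform(-0.1, 0.1))
            e = error(candidate)
            if e < best_err:
                best, best_err = candidate, e
        print("fitted parameters:", best, "error:", best_err)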

    A Methodology for Developing Computational Implementations of Scientific Theories

    Computer programs have become a popular representation for scientific theories, particularly for implementing models or simulations of observed phenomena. Expressing a theory as an executable computer program provides many benefits, including: making all processes concrete, supporting the development of specific models, and hence enabling quantitative predictions to be derived from the theory. However, as implementations of scientific theories, these computer programs will be subject to change and modification. As programs change, their behaviour will also change, and ensuring continuity in the scientific value of the program is difficult. We propose a methodology for developing computer software implementing scientific theories. This methodology allows the developer to continuously change and extend their software, whilst alerting the developer to any changes in its scientific interpretation. We introduce tools for managing this development process, as well as for optimising the developed models.
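
    One way to picture the proposed methodology is as regression testing against a theory's canonical results: whenever the program changes, automated behavioural checks flag any drift in its scientific interpretation. The test below is a generic sketch of that idea, with an assumed simulate function and tolerance; it is not the tooling introduced in the paper.

        # Sketch: a behavioural regression test guarding a theory's key prediction.
        import unittest

        def simulate(stimulus_length):
            """Hypothetical model run; returns predicted recall accuracy."""
            return max(0.0, 1.0 - 0.1 * stimulus_length)

        class CanonicalBehaviourTest(unittest.TestCase):
            def test_recall_declines_with_length(self):
                # The theory's qualitative claim: longer stimuli are recalled less well.
                self.assertGreater(simulate(3), simulate(7))

            def test_matches_published_fit(self):
                # Quantitative check against a previously accepted value (assumed here).
                self.assertAlmostEqual(simulate(5), 0.5, delta=0.05)

        if __name__ == "__main__":
            unittest.main()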

    Building models of learning and expertise with CHREST

    CHREST (Chunk Hierarchy and REtrieval STructures) is a complete computational architecture implementing processes of learning and perception. CHREST models have successfully simulated human data in a variety of domains, such as the acquisition of syntactic categories, expertise in programming and in chess, concept formation, implicit learning, and the acquisition of multiple representations in physics for problem solving. In this tutorial, we describe the learning, perception and attention mechanisms within CHREST as well as key empirical data captured by CHREST models. Apart from the theoretical material, this tutorial also introduces participants to an implementation of CHREST and its use in a variety of domains. Material and examples are provided so participants can adapt and extend the CHREST architecture.
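
    CHREST learns by growing chunks from repeated exposure to input. As a toy illustration only (not the CHREST implementation, whose discrimination, familiarisation, and retrieval mechanisms are far richer), the sketch below extends the longest matching chunk in memory by one symbol on each exposure to a sequence.

        # Toy sketch of chunk learning (EPAM/CHREST-inspired, not the real architecture).
        def longest_known_prefix(sequence, chunks):
            """Return the longest chunk in memory that is a prefix of the sequence."""
            best = ()
            for chunk in chunks:
                if sequence[:len(chunk)] == chunk and len(chunk) > len(best):
                    best = chunk
            return best

        def learn(sequence, chunks):
            """Extend the best-matching chunk by one symbol, or start a new one."""
            sequence = tuple(sequence)
            prefix = longest_known_prefix(sequence, chunks)
            if len(prefix) < len(sequence):
                chunks.add(sequence[:len(prefix) + 1])   # memory grows a little each time

        chunks = set()
        for _ in range(4):                               # repeated exposure builds larger chunks
            learn(("b", "l", "i", "k"), chunks)
        print(sorted(chunks))   # [('b',), ('b', 'l'), ('b', 'l', 'i'), ('b', 'l', 'i', 'k')]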

    An Introduction to Slice-Based Cohesion and Coupling Metrics

    This report provides an overview of slice-based software metrics. It brings together information about the development of the metrics from Weiser’s original idea that program slices may be used in the measurement of program complexity, with alternative slice-based measures proposed by other researchers. In particular, it details two aspects of slice-based metric calculation not covered elsewhere in the literature: output variables and worked examples of the calculations. First, output variables are explained, their use explored and standard reference terms and usage proposed. Calculating slice-based metrics requires a clear understanding of ‘output variables’ because they form the basis for extracting the program slices on which the calculations depend. This report includes a survey of the variation in the definition of output variables used by different research groups and suggests standard terms of reference for these variables. Our study identifies four elements which are combined in the definition of output variables. These are the function return value, modified global variables, modified reference parameters and variables printed or otherwise output by the module. Second, slice-based metric calculations are explained with the aid of worked examples, to assist newcomers to the field. Step-by-step calculations of slice-based cohesion and coupling metrics based on the vertices output by the static analysis tool CodeSurfer® are presented and compared with line-based calculations.
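
    As a small worked illustration in the spirit of the report's examples, the sketch below computes the classic slice-based cohesion measures (Tightness, Coverage and Overlap, in the Weiser / Ott-and-Thuss style) from slices represented as sets of statement identifiers. The module and slice contents are invented, and the vertex- versus line-based counting conventions discussed in the report are not reproduced.

        # Illustrative calculation of slice-based cohesion metrics (invented example data).
        module = set(range(1, 11))                 # 10 statements in the module
        slices = {                                 # end slice per output variable (assumed)
            "return_value":   {1, 2, 3, 4, 5, 9, 10},
            "global_total":   {1, 2, 3, 6, 7, 10},
            "printed_result": {1, 2, 3, 8, 10},
        }

        intersection = set.intersection(*slices.values())   # statements in every slice

        tightness = len(intersection) / len(module)
        coverage  = sum(len(s) for s in slices.values()) / (len(slices) * len(module))
        overlap   = sum(len(intersection) / len(s) for s in slices.values()) / len(slices)

        print(f"tightness = {tightness:.2f}")      # |intersection of slices| / module size
        print(f"coverage  = {coverage:.2f}")       # mean slice size relative to module size
        print(f"overlap   = {overlap:.2f}")        # mean share of each slice that is common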

    Discovering predictive variables when evolving cognitive models

    A non-dominated sorting genetic algorithm is used to evolve models of learning from different theories for multiple tasks. Correlation analysis is performed to identify parameters which affect performance on specific tasks; these are the predictive variables. Mutation is biased so that changes to parameter values tend to preserve values within the population's current range. Experimental results show that optimal models are evolved, and also that uncovering predictive variables is beneficial in improving the rate of convergence.
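
    The biased mutation described here keeps new parameter values within the range currently spanned by the population. A minimal sketch of that idea follows, with an invented population and no attempt to reproduce the paper's full non-dominated sorting setup.

        # Sketch: mutation biased to stay within the population's current value range.
        import random

        population = [                              # each individual: parameter dict (assumed)
            {"learning_rate": 0.10, "decay": 0.50},
            {"learning_rate": 0.25, "decay": 0.40},
            {"learning_rate": 0.15, "decay": 0.65},
        ]

        def biased_mutate(individual, population, prob=0.5):
            """With probability prob per parameter, resample uniformly within the
            range the population currently spans for that parameter."""
            mutant = dict(individual)
            for name in mutant:
                if random.random() < prob:
                    values = [ind[name] for ind in population]
                    mutant[name] = random.uniform(min(values), max(values))
            return mutant

        print(biased_mutate(population[0], population))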

    Attention mechanisms in the CHREST cognitive architecture

    In this paper, we describe the attention mechanisms in CHREST, a computational architecture of human visual expertise. CHREST organises information acquired by direct experience from the world in the form of chunks. These chunks are searched for and verified by a set of heuristics which together comprise the attention mechanism. We explain how the attention mechanism combines bottom-up and top-down heuristics from internal and external sources of information. We describe some experimental evidence demonstrating the correspondence of CHREST’s perceptual mechanisms with those of human subjects. Finally, we discuss how visual attention can play an important role in actions carried out by human experts in domains such as chess.
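
    The combination of bottom-up and top-down heuristics can be pictured as scoring candidate fixation points from two sources: a salience heuristic driven by the scene itself and an expectation heuristic driven by recognised chunks. The sketch below is a schematic illustration of that combination with invented heuristics and weights; it does not reproduce CHREST's actual attention heuristics.

        # Schematic sketch of combining bottom-up and top-down fixation proposals.
        def bottom_up_salience(square, scene):
            """Assumed heuristic: prefer squares that contain a piece."""
            return 1.0 if scene.get(square) else 0.0

        def top_down_expectation(square, recognised_chunks):
            """Assumed heuristic: prefer squares that recognised chunks point to."""
            return sum(1.0 for chunk in recognised_chunks if square in chunk)

        def next_fixation(candidates, scene, recognised_chunks, weight=0.5):
            score = lambda sq: (weight * bottom_up_salience(sq, scene)
                                + (1 - weight) * top_down_expectation(sq, recognised_chunks))
            return max(candidates, key=score)

        scene = {"e4": "P", "d5": "p"}               # tiny invented chess fragment
        chunks = [{"e4", "d5"}]                      # one recognised chunk
        print(next_fixation(["e4", "d5", "a1"], scene, chunks))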

    Linking working memory and long-term memory: A computational model of the learning of new words

    The nonword repetition (NWR) test has been shown to be a good predictor of children’s vocabulary size. NWR performance has been explained using phonological working memory, which is seen as a critical component in the learning of new words. However, no detailed specification of the link between phonological working memory and long-term memory (LTM) has been proposed. In this paper, we present a computational model of children’s vocabulary acquisition (EPAM-VOC) that specifies how phonological working memory and LTM interact. The model learns phoneme sequences, which are stored in LTM and mediate how much information can be held in working memory. The model’s behaviour is compared with that of children in a new study of NWR, conducted in order to ensure the same nonword stimuli and methodology across ages. EPAM-VOC shows a pattern of results similar to that of children: performance is better for shorter nonwords and for wordlike nonwords, and performance improves with age. EPAM-VOC also simulates the superior performance for single consonant nonwords over clustered consonant nonwords found in previous NWR studies. EPAM-VOC provides a simple and elegant computational account of some of the key processes involved in the learning of new words: it specifies how phonological working memory and LTM interact; makes testable predictions; and suggests that developmental changes in NWR performance may reflect differences in the amount of information that has been encoded in LTM rather than developmental changes in working memory capacity.
    Keywords: EPAM, working memory, long-term memory, nonword repetition, vocabulary acquisition, developmental change
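
    The central mechanism, LTM chunks mediating how much fits in a time-limited phonological working memory, can be shown with a toy calculation. The time constants, the greedy chunk matching, and the treatment of letters as phonemes below are assumptions for illustration, not EPAM-VOC's parameter values or algorithms.

        # Toy illustration: LTM chunks determine how much fits in time-limited working memory.
        WM_LIMIT_MS = 2000          # assumed working-memory time limit
        CHUNK_COST_MS = 500         # assumed time to encode one known chunk or novel phoneme

        def encoding_cost(phonemes, ltm_chunks):
            """Greedily cover the phoneme sequence with known chunks where possible."""
            cost, i = 0, 0
            while i < len(phonemes):
                match = max((c for c in ltm_chunks if phonemes[i:i + len(c)] == c),
                            key=len, default=(phonemes[i],))
                cost += CHUNK_COST_MS
                i += len(match)
            return cost

        def can_repeat(nonword, ltm_chunks):
            return encoding_cost(tuple(nonword), ltm_chunks) <= WM_LIMIT_MS

        novice_ltm = set()                           # few chunks: every phoneme costs a step
        expert_ltm = {("b", "l"), ("i", "k", "s")}   # larger chunks learned from input
        print(can_repeat("bliks", novice_ltm))       # False: 5 x 500 ms exceeds the limit
        print(can_repeat("bliks", expert_ltm))       # True: two chunks fit within the limit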